Text response generation for multimodal task-oriented dialog systems, which aims to generate a proper text response given the multimodal context, is an essential yet challenging task. Although existing efforts have achieved compelling success, they still suffer from two pivotal limitations: 1) overlooking the benefit of generative pre-training, and 2) ignoring the knowledge related to the textual context. To address these limitations, we propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD), consisting of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation. Specifically, the dual knowledge selection component aims to select the related knowledge according to both the textual and visual modalities of the given context. Thereafter, the dual knowledge-enhanced context learning component targets seamlessly integrating the selected knowledge into multimodal context learning from both global and local perspectives, where the cross-modal semantic relation is also explored. Moreover, the knowledge-enhanced response generation component comprises a revised BART decoder, where an additional dot-product knowledge-decoder attention sublayer is introduced to explicitly utilize the knowledge to advance text response generation. Extensive experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
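As a rough illustration of the dot-product knowledge-decoder attention idea, the following PyTorch sketch adds one cross-attention sublayer in which decoder states attend to embeddings of the selected knowledge; the class name, dimensions, and residual wiring are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class KnowledgeDecoderAttention(nn.Module):
    """Hypothetical extra sublayer for a BART-style decoder: decoder states
    attend to selected knowledge embeddings via scaled dot-product attention."""
    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, dec_states, knowledge_emb):
        # dec_states: (B, T, d) decoder hidden states
        # knowledge_emb: (B, K, d) embeddings of the selected knowledge entries
        ctx, _ = self.attn(query=dec_states, key=knowledge_emb, value=knowledge_emb)
        return self.norm(dec_states + ctx)  # residual connection + layer norm
```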
Optimizing an approximation of Average Precision (AP) has been widely studied for image retrieval. Limited by the definition of AP, such methods consider both the negative and the positive instances ranked before each positive instance. However, we claim that penalizing only the negative instances ranked before positive ones is enough, because the loss comes only from these negatives. To this end, we propose a novel loss, namely Penalizing Negative instances before Positive ones (PNP), which directly minimizes the number of negative instances before each positive one. In addition, AP-based methods adopt a fixed and sub-optimal gradient assignment strategy. Therefore, we systematically investigate different gradient assignment solutions by constructing derivative functions of the loss, resulting in PNP-I with increasing derivative functions and PNP-D with decreasing ones. PNP-I focuses more on the hard positive instances by assigning them larger gradients and attempts to pull all relevant instances closer. In contrast, PNP-D pays less attention to such instances and corrects them slowly. For most real-world data, one class usually contains several local clusters: PNP-I blindly gathers these clusters, whereas PNP-D keeps them as they are, which makes PNP-D superior. Experiments on three standard retrieval datasets show results consistent with the above analysis, and extensive evaluations demonstrate that PNP-D achieves state-of-the-art performance. Code is available at https://github.com/interestingzhuo/pnp_loss
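The core PNP idea can be sketched as a differentiable count of the negatives ranked above each positive. In the sketch below, the sigmoid relaxation, the temperature tau, and the log1p shaping (standing in for a PNP-D-style decreasing derivative) are assumptions, not the paper's exact formulation.

```python
import torch

def pnp_d_loss(scores, labels, tau: float = 0.05):
    """PNP-D-style sketch: for each positive, softly count the negatives
    scored above it, then apply log(1 + n) so the gradient shrinks as the
    count grows (a decreasing derivative function)."""
    pos = scores[labels == 1]  # (P,) similarities of positives to the query
    neg = scores[labels == 0]  # (N,) similarities of negatives
    # Soft indicator that a negative outranks a positive: sigmoid((s_n - s_p)/tau)
    diff = (neg.unsqueeze(0) - pos.unsqueeze(1)) / tau  # (P, N)
    n_before = torch.sigmoid(diff).sum(dim=1)           # soft count per positive
    return torch.log1p(n_before).mean()
```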
Optical flow estimation is an important yet challenging problem in the field of video analytics. Features at different semantic levels/layers of a convolutional neural network provide information at different granularities. To exploit such flexible and comprehensive information, we propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs. It consists of two main modules: pyramid correlation mapping and residual reconstruction. The pyramid correlation mapping module takes advantage of the multi-scale correlations of global/local patches by aggregating features of different scales to form a multi-level cost volume. The residual reconstruction module aims to reconstruct the sub-band high-frequency residuals of finer optical flow at each stage. Based on the pyramid correlation mapping, we further propose a correlation-warping-normalization (CWN) module to efficiently exploit the correlation dependency. Experimental results show that the proposed scheme achieves state-of-the-art performance, with improvements of 0.80, 1.15 and 0.10 in average end-point error (AEE) against FlowNet2, LiteFlowNet and PWC-Net, respectively, on the final pass of the Sintel dataset.
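A minimal sketch of the local correlation that underlies such a cost volume, assuming feature maps from the two frames and a search radius max_disp; applying this at several pyramid levels would yield the multi-level cost volume described above.

```python
import torch
import torch.nn.functional as F

def local_correlation(f1, f2, max_disp: int = 4):
    """Correlate each position of f1 with a (2*max_disp+1)^2 neighbourhood
    of f2, producing one cost-volume channel per displacement."""
    b, c, h, w = f1.shape
    f2 = F.pad(f2, [max_disp] * 4)  # pad left/right/top/bottom
    vols = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2[:, :, dy:dy + h, dx:dx + w]
            vols.append((f1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(vols, dim=1)  # (B, (2*max_disp+1)^2, H, W)
```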
Audio-visual approaches involving visual inputs have laid the foundation for recent progress in speech separation. However, the optimization of the concurrent usage of auditory and visual inputs is still an active research area. Inspired by the cortico-thalamo-cortical circuit, in which the sensory processing mechanisms of different modalities modulate one another via the non-lemniscal sensory thalamus, we propose a novel cortico-thalamo-cortical neural network (CTCNet) for audio-visual speech separation (AVSS). First, the CTCNet learns hierarchical auditory and visual representations in a bottom-up manner in separate auditory and visual subnetworks, mimicking the functions of the auditory and visual cortical areas. Then, inspired by the large number of connections between cortical regions and the thalamus, the model fuses the auditory and visual information in a thalamic subnetwork through top-down connections. Finally, the model transmits this fused information back to the auditory and visual subnetworks, and the above process is repeated several times. The results of experiments on three speech separation benchmark datasets show that CTCNet remarkably outperforms existing AVSS methods with considerably fewer parameters. These results suggest that mimicking the anatomical connectome of the mammalian brain has great potential for advancing the development of deep neural networks. The project repo is https://github.com/JusperLee/CTCNet.
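A schematic sketch of the repeated fuse-and-feed-back loop, with simple linear layers standing in for the paper's auditory, visual, and thalamic subnetworks; all dimensions and block choices are assumptions.

```python
import torch
import torch.nn as nn

class FusionCycle(nn.Module):
    """Schematic cortico-thalamo-cortical loop: modality features are fused
    in a shared ('thalamic') layer and fed back to each modality, repeatedly."""
    def __init__(self, d_a: int = 256, d_v: int = 256, n_cycles: int = 3):
        super().__init__()
        self.fuse = nn.Linear(d_a + d_v, d_a + d_v)  # thalamic fusion
        self.back_a = nn.Linear(d_a + d_v, d_a)      # top-down to audio
        self.back_v = nn.Linear(d_a + d_v, d_v)      # top-down to visual
        self.n_cycles = n_cycles

    def forward(self, a, v):  # a: (B, T, d_a) audio, v: (B, T, d_v) visual
        for _ in range(self.n_cycles):
            fused = torch.relu(self.fuse(torch.cat([a, v], dim=-1)))
            a = a + self.back_a(fused)  # residual top-down feedback
            v = v + self.back_v(fused)
        return a, v
```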
This is a brief technical report of our proposed method for the Multiple-Object Tracking (MOT) Challenge in Complex Environments. In this paper, we treat the MOT task as a two-stage task consisting of human detection and trajectory matching. Specifically, we design an improved human detector and associate most of the detections to guarantee the integrity of the motion trajectories. We also propose a location-wise matching matrix to obtain more accurate trajectory matching. Without any model merging, our method achieves 66.672 HOTA and 93.971 MOTA on the DanceTrack challenge dataset.
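A toy version of a location-based matching step, assuming boxes in (x1, y1, x2, y2) format and using center distance followed by Hungarian assignment; the report's actual matrix may weight location information differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def location_matching(tracks, dets):
    """Build a location-wise cost matrix from box-center distances and solve
    the assignment with the Hungarian algorithm (illustrative only)."""
    tracks, dets = np.asarray(tracks, float), np.asarray(dets, float)
    tc = (tracks[:, :2] + tracks[:, 2:4]) / 2  # track box centers
    dc = (dets[:, :2] + dets[:, 2:4]) / 2      # detection box centers
    cost = np.linalg.norm(tc[:, None, :] - dc[None, :, :], axis=-1)  # (N, M)
    rows, cols = linear_sum_assignment(cost)   # optimal track-detection pairs
    return list(zip(rows, cols)), cost
```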
Adversarial attacks can easily fool object recognition systems based on deep neural networks (DNNs). Although many defense methods have been proposed in recent years, most of them can still be adaptively evaded. One reason for the weak adversarial robustness may be that DNNs are only supervised by category labels and do not have part-based inductive bias like the recognition process of humans. Inspired by a well-known theory in cognitive psychology -- recognition-by-components, we propose a novel object recognition model ROCK (Recognizing Object by Components with human prior Knowledge). It first segments parts of objects from images, then scores part segmentation results with predefined human prior knowledge, and finally outputs prediction based on the scores. The first stage of ROCK corresponds to the process of decomposing objects into parts in human vision. The second stage corresponds to the decision process of the human brain. ROCK shows better robustness than classical recognition models across various attack settings. These results encourage researchers to rethink the rationality of currently widely-used DNN-based object recognition models and explore the potential of part-based models, once important but recently ignored, for improving robustness.
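The two-stage idea can be illustrated with a toy scoring function, where part_probs stands in for the part-segmentation output and part_ownership for the predefined human prior knowledge; both are hypothetical stand-ins, not the paper's components.

```python
import numpy as np

def rock_scores(part_probs, part_ownership):
    """Toy two-stage scoring: average per-pixel part evidence, then score
    each class by the evidence for the parts it is known to own."""
    # part_probs: (H, W, P) part-segmentation probabilities
    # part_ownership: (C, P) prior knowledge, 1 if class c contains part p
    part_evidence = part_probs.mean(axis=(0, 1))   # (P,) average part presence
    class_scores = part_ownership @ part_evidence  # (C,) accumulated evidence
    return class_scores / np.maximum(part_ownership.sum(axis=1), 1)
```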
It is widely believed that the higher the uncertainty of a word in a caption, the more inter-correlated context information is required to determine it. However, current image captioning methods usually generate all the words of a sentence sequentially and treat them equally. In this paper, we propose an uncertainty-aware image captioning framework that, in parallel and iteratively, inserts discontinuous candidate words between existing words from easy to difficult until convergence. We hypothesize that high-uncertainty words in a sentence need more prior information to make a correct decision and should be produced at a later stage. The resulting non-autoregressive hierarchy makes the caption generation explainable and intuitive. Specifically, we utilize an image-conditioned bag-of-words model to measure word uncertainty and apply a dynamic programming algorithm to construct the training pairs. During inference, we devise an uncertainty-adaptive parallel beam search technique that yields an empirically logarithmic time complexity. Extensive experiments on the MS COCO benchmark reveal that our approach outperforms the strong baseline and related methods in both captioning quality and decoding speed.
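A sketch of how easy-to-hard training stages might be built from an image-conditioned bag-of-words uncertainty; the greedy chunking below is a simplification standing in for the paper's dynamic-programming construction.

```python
import torch

def build_insertion_stages(caption_ids, bow_logits, n_stages: int = 3):
    """Rank the caption's words by bag-of-words uncertainty (-log p) and
    reveal them in order of increasing uncertainty, stage by stage."""
    probs = torch.softmax(bow_logits, dim=-1)        # (V,) vocabulary distribution
    unc = -torch.log(probs[caption_ids] + 1e-9)      # per-word uncertainty
    order = torch.argsort(unc)                       # easy -> hard positions
    stages, revealed = [], []
    for chunk in torch.chunk(order, n_stages):
        revealed += chunk.tolist()
        stages.append(sorted(revealed))              # word positions shown so far
    return stages
```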
Recently, unsupervised learning has made impressive progress on various tasks. Despite the dominance of discriminative models, increasing attention is drawn to representations learned by generative models and in particular, Generative Adversarial Networks (GANs). Previous works on the interpretation of GANs reveal that GANs encode semantics in feature maps in a linearly separable form. In this work, we further find that GAN's features can be well clustered with the linear separability assumption. We propose a novel clustering algorithm, named KLiSH, which leverages the linear separability to cluster GAN's features. KLiSH succeeds in extracting fine-grained semantics of GANs trained on datasets of various objects, e.g., car, portrait, animals, and so on. With KLiSH, we can sample images from GANs along with their segmentation masks and synthesize paired image-segmentation datasets. Using the synthesized datasets, we enable two downstream applications. First, we train semantic segmentation networks on these datasets and test them on real images, realizing unsupervised semantic segmentation. Second, we train image-to-image translation networks on the synthesized datasets, enabling semantic-conditional image synthesis without human annotations.
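In the spirit of the linear-separability assumption, the loop below alternates between fitting a linear classifier to the current cluster labels and relabeling points with it; this is an illustrative simplification, not the KLiSH algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def linear_refine_clusters(feats, k: int = 10, n_iters: int = 5):
    """Initialize clusters with k-means, then repeatedly refit a linear
    classifier and reassign points by its linear decision boundaries."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=200).fit(feats, labels)
        labels = clf.predict(feats)  # relabel via the linear boundaries
    return labels
```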
To save cost, many deep neural networks (DNNs) are trained on third-party datasets downloaded from the internet, which enables attackers to implant backdoors into DNNs. In the 2D domain, the inherent structures of different image formats are similar, so a backdoor attack designed for one image format will suit the others. However, in the 3D world there is a huge disparity among different 3D data structures. As a result, a backdoor pattern designed for one 3D data structure will be disabled for other data structures of the same 3D scene. Therefore, this paper designs a uniform backdoor pattern, NRBdoor (Noisy Rotation Backdoor), which is able to adapt to heterogeneous 3D data structures. Specifically, we start from the unit rotation and then search for the optimal pattern through a noise generation and selection process. The proposed NRBdoor is natural and imperceptible, since rotation is a common operation that usually contains noise, due to both the mismatch between pairs of points and sensor calibration errors in real-world 3D scenes. Extensive experiments on 3D meshes and point clouds show that the proposed NRBdoor achieves state-of-the-art performance with negligible shape variation.
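A minimal sketch of a noisy-rotation trigger, assuming the input is an (N, 3) point cloud or mesh-vertex array: perturb the unit rotation with small random Euler angles and apply it. NRBdoor's noise generation and selection search is omitted here.

```python
import numpy as np

def noisy_rotation(points, angle_noise: float = 0.02, seed: int = 0):
    """Apply a small random rotation around the identity (unit) rotation,
    mimicking the natural rotation noise of real-world 3D captures."""
    rng = np.random.default_rng(seed)
    ax, ay, az = rng.normal(0.0, angle_noise, size=3)  # small Euler angles
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T  # (N, 3) rotated points
```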
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially in the case of insufficient labeled molecules. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, the Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we found that a high masking proportion (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder only takes the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
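A sketch of the 60% masking step on a molecular graph, assuming PyG-style node features and a (2, E) edge index; the actual BatmanNet masking and reconstruction pipeline is more involved.

```python
import torch

def mask_molecular_graph(node_feats, edge_index, mask_ratio: float = 0.6, seed: int = 0):
    """Sample visible atoms, keep only the bonds among them for the encoder,
    and return the masked atom indices for the decoder to reconstruct."""
    g = torch.Generator().manual_seed(seed)
    n = node_feats.size(0)
    perm = torch.randperm(n, generator=g)
    n_vis = max(1, int(n * (1 - mask_ratio)))
    visible, masked = perm[:n_vis], perm[n_vis:]
    vis_set = torch.zeros(n, dtype=torch.bool)
    vis_set[visible] = True
    keep = vis_set[edge_index[0]] & vis_set[edge_index[1]]  # bonds among visible atoms
    return node_feats[visible], edge_index[:, keep], masked
```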